
Keyword Search Result

[Keyword] emotion recognition (30 hits)

Results 21-30 of 30

  • A Salient Feature Extraction Algorithm for Speech Emotion Recognition

    Ruiyu LIANG  Huawei TAO  Guichen TANG  Qingyun WANG  Li ZHAO  

     
    LETTER-Speech and Hearing
    Publicized: 2015/05/29  Vol: E98-D No:9  Page(s): 1715-1718

    A salient feature extraction algorithm is proposed to improve the recognition rate of speech emotion. First, the spectrogram of the emotional speech is calculated. Second, imitating the selective attention mechanism, the color, direction, and brightness maps of the spectrogram are computed. Each map is normalized and down-sampled to form a low-resolution feature matrix. Each feature matrix is then converted to a row vector, and principal component analysis (PCA) is used to reduce feature redundancy and make the subsequent classification algorithm more practical. Finally, the speech emotion is classified with a support vector machine. Compared with traditional features, the improvement in recognition rate reaches 15%.
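    A minimal sketch of the PCA-plus-SVM stage described above, assuming the saliency-style maps have already been computed, down-sampled, and flattened into one row vector per utterance (the spectrogram and map extraction are omitted). All names and sizes (X, y, the component count) are illustrative placeholders, not values from the paper.

    ```python
    # Sketch of the PCA + SVM stage (feature extraction not shown).
    # X holds one flattened saliency map per utterance; y holds emotion labels.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 1024))   # placeholder flattened saliency maps
    y = rng.integers(0, 4, size=200)   # placeholder labels for 4 emotions

    # PCA reduces redundancy among map pixels; the SVM then classifies.
    clf = make_pipeline(PCA(n_components=50), SVC(kernel="rbf", C=1.0))
    print(cross_val_score(clf, X, y, cv=5).mean())
    ```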

  • Speech Emotion Recognition Based on Sparse Transfer Learning Method

    Peng SONG  Wenming ZHENG  Ruiyu LIANG  

     
    LETTER-Speech and Hearing
    Publicized: 2015/04/10  Vol: E98-D No:7  Page(s): 1409-1412

    In traditional speech emotion recognition systems, recognition rates decrease dramatically when the training and testing utterances come from different corpora. To tackle this problem, inspired by recent developments in sparse coding and transfer learning, this letter presents a novel sparse transfer learning method for speech emotion recognition. First, a sparse coding algorithm is employed to learn a robust sparse representation of emotional features. Then, a novel sparse transfer learning approach is presented in which the distance between the feature distributions of the source and target datasets is used to regularize the sparse coding objective function. The experimental results demonstrate that, compared with the automatic recognition approach, the proposed method achieves promising improvements in recognition rates and significantly outperforms the classic dimension-reduction-based transfer learning approach.
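    The key idea, regularizing sparse coding with a distribution distance, can be illustrated with off-the-shelf pieces. The sketch below learns a dictionary on the source corpus, sparse-codes both corpora, and computes the squared linear-kernel MMD between the two code distributions, i.e., the kind of penalty term the paper adds to the objective. The full alternating optimization is omitted, and all data and parameter values are placeholders.

    ```python
    # Illustrative sketch: sparse-code source and target features with a
    # shared dictionary, then measure the (linear-kernel) MMD between the
    # two code distributions. Smaller MMD means the sparse representation
    # transfers better across corpora.
    import numpy as np
    from sklearn.decomposition import DictionaryLearning, sparse_encode

    rng = np.random.default_rng(0)
    Xs = rng.normal(size=(100, 40))           # source-corpus emotion features
    Xt = rng.normal(0.5, 1.0, size=(80, 40))  # target corpus, shifted stats

    dico = DictionaryLearning(n_components=60, alpha=1.0, max_iter=20).fit(Xs)
    Zs = sparse_encode(Xs, dico.components_, alpha=1.0)
    Zt = sparse_encode(Xt, dico.components_, alpha=1.0)

    # Squared linear MMD between the mean sparse codes of the two corpora.
    mmd2 = np.sum((Zs.mean(axis=0) - Zt.mean(axis=0)) ** 2)
    print(f"squared linear MMD between source/target codes: {mmd2:.4f}")
    ```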

  • Speech Emotion Recognition Using Transfer Learning

    Peng SONG  Yun JIN  Li ZHAO  Minghai XIN  

     
    LETTER-Speech and Hearing
    Vol: E97-D No:9  Page(s): 2530-2532

    A major challenge for speech emotion recognition is that recognition rates drop markedly when the training and deployment conditions do not use the same speech corpus. Transfer learning, which has successfully addressed cross-domain classification and recognition problems, is presented for cross-corpus speech emotion recognition. First, using the maximum mean discrepancy embedding (MMDE) optimization and dimension reduction algorithms, two close low-dimensional feature spaces are obtained for the source and target speech corpora, respectively. Then, a classifier is trained on the learned low-dimensional features of the labeled source corpus and applied directly to the unlabeled target corpus for emotion label recognition. Experimental results demonstrate that the transfer learning method significantly outperforms the traditional automatic recognition technique for cross-corpus speech emotion recognition.
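    MMDE itself solves a semidefinite program, so the sketch below substitutes a shared PCA projection as a rough stand-in for obtaining two close low-dimensional spaces; the train-on-source, predict-on-target flow follows the abstract. Data, dimensions, and names are illustrative assumptions.

    ```python
    # Rough stand-in for the cross-corpus pipeline: a shared PCA projection
    # replaces MMDE; the classifier is trained on the labeled source corpus
    # and applied directly to the unlabeled target corpus.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    Xs = rng.normal(size=(150, 60))            # labeled source-corpus features
    ys = rng.integers(0, 4, size=150)          # source emotion labels
    Xt = rng.normal(0.3, 1.0, size=(120, 60))  # unlabeled target, shifted

    pca = PCA(n_components=20).fit(np.vstack([Xs, Xt]))  # shared low-dim space
    clf = SVC().fit(pca.transform(Xs), ys)               # train on source only
    target_labels = clf.predict(pca.transform(Xt))       # apply to target
    print(np.bincount(target_labels))
    ```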

  • Integrating Facial Expression and Body Gesture in Videos for Emotion Recognition

    Jingjie YAN  Wenming ZHENG  Minhai XIN  Jingwei YAN  

     
    LETTER-Pattern Recognition
    Vol: E97-D No:3  Page(s): 610-613

    In this letter, we study how to use face and gesture image sequences for video-based bimodal emotion recognition, applying both the Harris plus cuboids spatio-temporal feature (HST) and a sparse canonical correlation analysis (SCCA) fusion method. To extract the spatio-temporal features effectively, we adopt the Harris 3D feature detector proposed by Laptev and Lindeberg to find interest points in both face and gesture videos, and then apply the cuboids feature descriptor to extract the facial expression and gesture emotion features [1],[2]. To further extract common emotion features from the facial expression and gesture feature sets, the SCCA method is applied, and the extracted features are used for bimodal emotion classification, with the K-nearest neighbor classifier and the SVM classifier each used for this purpose. We test this method on the bimodal face and body gesture (FABO) database, and the experimental results demonstrate better recognition accuracy than other methods.
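    The fusion step can be approximated with ordinary canonical correlation analysis. scikit-learn ships plain CCA rather than the sparse variant (SCCA) used in the paper, so the sketch below is an analogue, not the authors' method; the feature matrices are placeholders standing in for the HST cuboid features.

    ```python
    # Sketch of the fusion step: project the face and gesture feature sets
    # into a common correlated space, concatenate, and classify.
    import numpy as np
    from sklearn.cross_decomposition import CCA
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    face = rng.normal(size=(120, 50))     # placeholder facial cuboid features
    gesture = rng.normal(size=(120, 40))  # placeholder gesture cuboid features
    y = rng.integers(0, 6, size=120)      # placeholder labels for 6 emotions

    cca = CCA(n_components=10).fit(face, gesture)
    f_c, g_c = cca.transform(face, gesture)   # correlated projections
    fused = np.hstack([f_c, g_c])             # common emotion features
    knn = KNeighborsClassifier(n_neighbors=5).fit(fused, y)
    print(knn.score(fused, y))
    ```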

  • Personalized Emotion Recognition Considering Situational Information and Time Variance of Emotion

    Yong-Soo SEOL  Han-Woo KIM  

     
    PAPER-Human-computer Interaction
    Vol: E96-D No:11  Page(s): 2409-2416

    To understand human emotion, it is necessary to be aware of the surrounding situation and of individual personalities; in most previous studies, however, these important aspects were not considered, and emotion recognition was treated as a pure classification problem. In this paper, we attempt new approaches that utilize a person's situational information and personality for understanding emotion. We propose a method of extracting situational information and building a personalized emotion model that reflects the personality of each character in a text. To extract and utilize situational information, we propose a situation model using lexical and syntactic information. In addition, to reflect the personality of an individual, we propose a personalized emotion model using KBANN (Knowledge-Based Artificial Neural Network). Our proposed system retains the advantages of a traditional keyword-spotting algorithm, and it also reflects the fact that the strength of emotion decreases over time. Experimental results show that the proposed system recognizes a person's emotion more accurately and intelligently than previous methods.
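    Two of the abstract's ideas, keyword spotting and the decay of emotion strength over time, are easy to make concrete. The sketch below is a minimal illustration only: the lexicon, weights, and decay constant are invented placeholders, and the KBANN personalization and situation model are not shown.

    ```python
    # Minimal sketch: per-sentence keyword spotting plus exponential decay
    # of previously accumulated emotion strength.
    import math

    LEXICON = {"happy": ("joy", 1.0), "angry": ("anger", 1.0),
               "sad": ("sadness", 0.8)}   # assumed toy lexicon
    DECAY = 0.5                           # assumed per-sentence decay rate

    def track_emotions(sentences):
        strength = {}
        for text in sentences:
            for emo in strength:                  # old emotions fade over time
                strength[emo] *= math.exp(-DECAY)
            for word in text.lower().split():     # keyword spotting
                if word in LEXICON:
                    emo, w = LEXICON[word]
                    strength[emo] = strength.get(emo, 0.0) + w
            yield dict(strength)

    for state in track_emotions(["I am so happy today",
                                 "nothing notable here",
                                 "now I am angry"]):
        print(state)
    ```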

  • A Hybrid Speech Emotion Recognition System Based on Spectral and Prosodic Features

    Yu ZHOU  Junfeng LI  Yanqing SUN  Jianping ZHANG  Yonghong YAN  Masato AKAGI  

     
    PAPER-Human-computer Interaction
    Vol: E93-D No:10  Page(s): 2813-2821

    In this paper, we present a hybrid speech emotion recognition system exploiting both spectral and prosodic features of speech. To capture emotional information in the spectral domain, we propose a new spectral feature extraction method that applies a novel non-uniform subband processing instead of the mel-frequency subbands used in Mel-Frequency Cepstral Coefficients (MFCC). For prosodic features, a set of features closely correlated with speech emotional states is selected. In the proposed hybrid system, because of the inherently different characteristics of these two kinds of features (e.g., data size), the newly extracted spectral features are modeled by a Gaussian Mixture Model (GMM) and the selected prosodic features are modeled by a Support Vector Machine (SVM). The final result is obtained by combining the outputs of these two subsystems. Experimental results show that (1) the proposed non-uniform spectral features are more effective than traditional MFCC features for emotion recognition, and (2) the proposed hybrid system using both spectral and prosodic features yields relative recognition error reductions of 17.0% over traditional systems using only spectral features and 62.3% over those using only prosodic features.
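    The two-subsystem design lends itself to a direct sketch: one GMM per emotion class scores the spectral features, an SVM handles the prosodic features, and the scores are combined. The equal fusion weight, feature dimensions, and data here are assumed placeholders, not the paper's settings.

    ```python
    # Sketch of the hybrid scheme: per-class GMMs for spectral features,
    # an SVM for prosodic features, scores fused for the final decision.
    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n, n_classes = 300, 4
    spec = rng.normal(size=(n, 24))  # stands in for non-uniform subband feats
    pros = rng.normal(size=(n, 10))  # stands in for selected prosodic feats
    y = rng.integers(0, n_classes, size=n)

    gmms = [GaussianMixture(n_components=4, covariance_type="diag",
                            random_state=0).fit(spec[y == c])
            for c in range(n_classes)]
    svm = SVC(probability=True, random_state=0).fit(pros, y)

    gmm_ll = np.stack([g.score_samples(spec) for g in gmms], axis=1)  # log-lik
    fused = 0.5 * gmm_ll + 0.5 * np.log(svm.predict_proba(pros) + 1e-9)
    pred = fused.argmax(axis=1)   # final emotion from both subsystems
    print((pred == y).mean())
    ```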

  • Intentional Voice Command Detection for Trigger-Free Speech Interface

    Yasunari OBUCHI  Takashi SUMIYOSHI  

     
    PAPER-Robust Speech Recognition
    Vol: E93-D No:9  Page(s): 2440-2450

    In this paper we introduce a new audio processing framework that is essential for achieving a trigger-free speech interface for home appliances. If the speech interface is to run continuously in real environments, it must extract occasional voice commands and reject everything else. Reducing the number of false alarms is extremely important, because the number of irrelevant inputs is much larger than the number of voice commands, even for heavy users of appliances. The framework, called Intentional Voice Command Detection, is based on voice activity detection but enhanced by various speech/audio processing techniques such as emotion recognition. The effectiveness of the proposed framework is evaluated using a newly collected large-scale corpus. The advantages of combining various features were tested and confirmed, and a simple LDA-based classifier demonstrated acceptable performance. The effectiveness of various methods of user adaptation is also discussed.
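    The "simple LDA-based classifier" stage invites a short sketch: segment-level features are combined and fed to linear discriminant analysis, with a decision threshold that trades missed commands against false alarms. The features and threshold below are illustrative assumptions, not the paper's configuration.

    ```python
    # Sketch of LDA-based intentional voice command detection on combined
    # segment-level features (energy, F0 statistics, emotion scores, ...).
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 8))     # placeholder combined segment features
    y = rng.integers(0, 2, size=500)  # 1 = intentional command, 0 = other

    lda = LinearDiscriminantAnalysis().fit(X, y)
    scores = lda.decision_function(X)
    accepted = scores > 1.0   # raising the threshold trades misses for fewer
                              # false alarms, the critical error type here
    print(accepted.sum(), "segments accepted as commands")
    ```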

  • A Technique for Estimating Intensity of Emotional Expressions and Speaking Styles in Speech Based on Multiple-Regression HSMM

    Takashi NOSE  Takao KOBAYASHI  

     
    PAPER-Speech and Hearing
    Vol: E93-D No:1  Page(s): 116-124

    In this paper, we propose a technique for estimating the degree, or intensity, of emotional expressions and speaking styles appearing in speech. The key idea is based on a style control technique for speech synthesis using a multiple-regression hidden semi-Markov model (MRHSMM), and the proposed technique can be viewed as the inverse of style control. In the proposed technique, the acoustic features of spectrum, power, fundamental frequency, and duration are simultaneously modeled using the MRHSMM. We derive an algorithm, based on a maximum-likelihood criterion, for estimating the explanatory variables of the MRHSMM, each of which represents the degree or intensity of an emotional expression or speaking style appearing in the acoustic features of speech. Experimental results on two types of speech data, simulated emotional speech and spontaneous speech with different speaking styles, demonstrate the ability of the proposed technique: the estimated values correlate with human perception.
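    In an MRHSMM the state mean vectors are linear in the style vector, so under simplifying assumptions (known regression matrices, known state alignment, isotropic noise) the maximum-likelihood estimate of the style vector reduces to least squares. The toy sketch below illustrates that inverse-estimation idea only; it is not the paper's full algorithm, and all values are synthetic.

    ```python
    # Toy sketch of the inverse-style-control idea: state means obey
    # mu_i = H_i @ xi, so with known H_i the ML estimate of the style
    # vector xi reduces to (weighted) least squares over all states.
    import numpy as np

    rng = np.random.default_rng(0)
    xi_true = np.array([1.0, 0.7])   # [bias, style intensity] (synthetic)
    H = rng.normal(size=(5, 3, 2))   # regression matrices for 5 states
    obs = np.stack([h @ xi_true for h in H]) \
        + 0.05 * rng.normal(size=(5, 3))   # noisy observed state means

    # Stack all states into one linear system and solve for xi.
    A = H.reshape(-1, 2)
    b = obs.reshape(-1)
    xi_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
    print("estimated style vector:", xi_hat)   # close to xi_true
    ```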

  • Use of Multimodal Information in Facial Emotion Recognition

    Liyanage C. DE SILVA  Tsutomu MIYASATO  Ryohei NAKATSU  

     
    PAPER-Artificial Intelligence and Cognitive Science
    Vol: E81-D No:1  Page(s): 105-114

    Detection of facial emotion has mainly been addressed by computer vision researchers on the basis of facial display, while detection of vocal expressions of emotion is found in the work of acoustics researchers. Most of these research paradigms are devoted purely to visual or purely to auditory human emotion detection. However, we find it very interesting to process this auditory and visual information together, since we expect such multimodal information processing to become the norm in the coming multimedia era. Through several intensive subjective evaluation studies, we found that human beings recognize Anger, Happiness, Surprise, and Dislike better from visual appearance than from voice alone. When the audio track of each emotion clip was dubbed with a different type of auditory emotional expression, Anger, Happiness, and Surprise remained video dominant, whereas the Dislike emotion drew mixed responses for different speakers. In both studies we found that the Sadness and Fear emotions were audio dominant. As a conclusion to the paper, we propose a method of facial emotion detection using a hybrid approach that exploits multimodal information for facial emotion recognition.

  • Emotion Enhanced Face to Face Meetings Using the Concept of Virtual Space Teleconferencing

    Liyanage C. DE SILVA  Tsutomu MIYASATO  Fumio KISHINO  

     
    PAPER
    Vol: E79-D No:6  Page(s): 772-780

    Here we investigate the unique advantages of our proposed Virtual Space Teleconferencing System (VST) in multimedia teleconferencing, with emphasis on facial emotion transmission and recognition. Specifically, we show that this concept enables a unique mode of communication in which the emotions of the local participant are transmitted to the remote party with a higher recognition rate, by enhancing the emotions through intelligent processing between the local and remote participants. In other words, such emotion-enhanced teleconferencing systems can surpass face-to-face meetings by effectively alleviating the barriers to recognizing emotions between different nations. We also show that this is a better alternative to the blurred or mosaicked facial images found in some television interviews with people who are unwilling to be identified in public.
